    Sensitivity and stability: A signal propagation sweet spot in a sheet of recurrent centre crossing neurons

    In this paper we demonstrate that signal propagation across a laminar sheet of recurrent neurons is maximised when two conditions are met. First, neurons must be in the so-called centre crossing configuration. Second, the network’s topology and weights must be such that the network comprises strongly coupled nodes, yet lies within the weakly coupled regime. We develop tools from linear stability analysis with which to describe this regime, and use them to examine the apparent tension between the sensitivity and instability of centre crossing networks.
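
    The stability regime described above can be made concrete with a small numerical check. The sketch below is illustrative only: the ring-lattice weight profile, the coupling-gain sweep and all parameter values are assumptions, not the configuration used in the paper. It builds a centre-crossing CTRNN, evaluates the Jacobian at the symmetric fixed point where every sigmoid sits at its midpoint, and sweeps a global gain to locate the point at which the leading eigenvalue crosses zero.

```python
# Hypothetical sketch: linear stability of a centre-crossing CTRNN at its
# symmetric fixed point, swept over a global coupling gain.
import numpy as np

def centre_crossing_jacobian(W, tau=1.0):
    """Jacobian of tau*dy/dt = -y + W @ sigmoid(y + theta) at the
    centre-crossing fixed point, where every sigmoid sits at its midpoint
    (slope sigma'(0) = 0.25)."""
    n = W.shape[0]
    return (-np.eye(n) + 0.25 * W) / tau

def ring_weights(n=100, sigma=3.0, seed=0):
    """Toy 'laminar sheet' as a 1-D ring with distance-dependent random
    weights (an illustrative stand-in for the paper's sheet topology)."""
    rng = np.random.default_rng(seed)
    idx = np.arange(n)
    dist = np.minimum(np.abs(idx[:, None] - idx[None, :]),
                      n - np.abs(idx[:, None] - idx[None, :]))
    profile = np.exp(-dist**2 / (2 * sigma**2))
    np.fill_diagonal(profile, 0.0)
    return profile * rng.normal(0.0, 1.0, size=(n, n))

W = ring_weights()
for gain in np.linspace(0.5, 4.0, 8):
    J = centre_crossing_jacobian(gain * W)
    lead = np.max(np.linalg.eigvals(J).real)
    print(f"gain {gain:4.2f}  leading Re(eigenvalue) {lead:+.3f}")
# The gain at which the leading eigenvalue crosses zero marks the loss of
# stability; the 'sweet spot' discussed above lies just below this point.
```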

    Neural complexity: a graph theoretic interpretation

    One of the central challenges facing modern neuroscience is to explain the ability of the nervous system to coherently integrate information across distinct functional modules in the absence of a central executive. To this end Tononi et al. [Proc. Nat. Acad. Sci. USA 91, 5033 (1994)] proposed a measure of neural complexity that purports to capture this property based on mutual information between complementary subsets of a system. Neural complexity, so defined, is one of a family of information theoretic metrics developed to measure the balance between the segregation and integration of a system's dynamics. One key question arising for such measures involves understanding how they are influenced by network topology. Sporns et al. [Cereb. Cortex 10, 127 (2000)] employed numerical models to determine the dependence of neural complexity on the topological features of a network. However, a complete picture has yet to be established. While De Lucia et al. [Phys. Rev. E 71, 016114 (2005)] made the first attempts at an analytical account of this relationship, their work utilized a formulation of neural complexity that, we argue, did not reflect the intuitions of the original work. In this paper we start by describing weighted connection matrices formed by applying a random continuous weight distribution to binary adjacency matrices. This allows us to derive an approximation for neural complexity in terms of the moments of the weight distribution and elementary graph motifs. In particular we explicitly establish a dependency of neural complexity on cyclic graph motifs.
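
    For a small system the underlying definition can be evaluated exhaustively. The sketch below is a minimal illustration, assuming a linear-Gaussian system x = Wx + noise driven by an arbitrary random weighted connection matrix rather than the ensembles analysed in the paper. It computes the Tononi-Sporns-Edelman neural complexity from the stationary covariance by averaging the Gaussian mutual information between each subset and its complement over bipartition sizes.

```python
# Hypothetical sketch: Tononi-Sporns-Edelman neural complexity for a small
# linear-Gaussian system driven by a weighted connection matrix.
import itertools
import numpy as np

def gaussian_entropy(cov):
    """Differential entropy of a multivariate Gaussian with covariance cov."""
    k = cov.shape[0]
    return 0.5 * (k * np.log(2 * np.pi * np.e) + np.linalg.slogdet(cov)[1])

def mutual_information(cov, subset):
    """MI between a subset of variables and its complement, for Gaussians."""
    n = cov.shape[0]
    comp = [i for i in range(n) if i not in subset]
    return (gaussian_entropy(cov[np.ix_(subset, subset)])
            + gaussian_entropy(cov[np.ix_(comp, comp)])
            - gaussian_entropy(cov))

def neural_complexity(cov):
    """C_N = sum over bipartition sizes k of the average MI between subsets
    of size k and their complement (exhaustive, so only for small n)."""
    n = cov.shape[0]
    total = 0.0
    for k in range(1, n // 2 + 1):
        mis = [mutual_information(cov, list(s))
               for s in itertools.combinations(range(n), k)]
        total += np.mean(mis)
    return total

# Stationary covariance of x = W x + noise, i.e. cov = (I-W)^-1 (I-W)^-T,
# with W a weak random weighted connection matrix -- illustrative values only.
rng = np.random.default_rng(1)
n = 6
W = 0.1 * rng.random((n, n)) * (rng.random((n, n)) < 0.4)
np.fill_diagonal(W, 0.0)
A = np.linalg.inv(np.eye(n) - W)
cov = A @ A.T
print("neural complexity:", neural_complexity(cov))
```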

    Learning action-oriented models through active inference

    Converging theories suggest that organisms learn and exploit probabilistic models of their environment. However, it remains unclear how such models can be learned in practice. The open-ended complexity of natural environments means that it is generally infeasible for organisms to model their environment comprehensively. Alternatively, action-oriented models attempt to encode a parsimonious representation of adaptive agent-environment interactions. One approach to learning action-oriented models is to learn online in the presence of goal-directed behaviours. This constrains an agent to behaviourally relevant trajectories, reducing the diversity of the data a model needs to account for. Unfortunately, this approach can cause models to prematurely converge to sub-optimal solutions, through a process we refer to as a bad-bootstrap. Here, we exploit the normative framework of active inference to show that efficient action-oriented models can be learned by balancing goal-oriented and epistemic (information-seeking) behaviours in a principled manner. We illustrate our approach using a simple agent-based model of bacterial chemotaxis. We first demonstrate that learning via goal-directed behaviour indeed constrains models to behaviourally relevant aspects of the environment, but that this approach is prone to sub-optimal convergence. We then demonstrate that epistemic behaviours facilitate the construction of accurate and comprehensive models, but that these models are not tailored to any specific behavioural niche and are therefore less efficient in their use of data. Finally, we show that active inference agents learn models that are parsimonious, tailored to action, and which avoid bad bootstraps and sub-optimal convergence. Critically, our results indicate that models learned through active inference can support adaptive behaviour in spite of, and indeed because of, their departure from veridical representations of the environment. Our approach provides a principled method for learning adaptive models from limited interactions with an environment, highlighting a route to sample efficient learning algorithms.
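
    The balance between goal-directed and epistemic behaviour can be illustrated with a far simpler model than the chemotaxis agent used in the paper. In the hypothetical sketch below, the action labels ("run" and "tumble"), the environment probabilities and the preferences are invented. A Beta-Bernoulli agent chooses between the two actions by minimising an expected free energy that sums a pragmatic term (expected log preference) and an epistemic term (expected information gain about the outcome probabilities).

```python
# Hypothetical sketch: a Beta-Bernoulli agent choosing between two actions by
# minimising expected free energy = -(pragmatic value + epistemic value).
import numpy as np
from scipy.special import digamma

rng = np.random.default_rng(0)
true_p = {"run": 0.8, "tumble": 0.3}        # illustrative environment
counts = {a: [1.0, 1.0] for a in true_p}    # Beta(alpha, beta) beliefs
log_pref = {"high": 0.0, "low": -3.0}       # preferences over outcomes

def epistemic_value(a_, b_):
    """Expected information gain about the outcome probability from one
    observation (Beta-Bernoulli mutual information)."""
    q = a_ / (a_ + b_)
    entropy_of_mean = -(q * np.log(q) + (1 - q) * np.log(1 - q))
    mean_entropy = -(q * (digamma(a_ + 1) - digamma(a_ + b_ + 1))
                     + (1 - q) * (digamma(b_ + 1) - digamma(a_ + b_ + 1)))
    return entropy_of_mean - mean_entropy

def expected_free_energy(action):
    a_, b_ = counts[action]
    q = a_ / (a_ + b_)                       # predicted P(high outcome)
    pragmatic = q * log_pref["high"] + (1 - q) * log_pref["low"]
    return -pragmatic - epistemic_value(a_, b_)

for t in range(200):
    action = min(counts, key=expected_free_energy)
    outcome = rng.random() < true_p[action]  # True = "high" outcome observed
    counts[action][0 if outcome else 1] += 1.0

print(counts)   # early trials sample both actions; later trials favour "run"
```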

    Nonmodular architectures of cognitive systems based on active inference

    In psychology and neuroscience it is common to describe cognitive systems as input/output devices where perceptual and motor functions are implemented in a purely feedforward, open-loop fashion. On this view, perception and action are often seen as encapsulated modules with limited interaction between them. While embodied and enactive approaches to cognitive science have challenged the idealisation of the brain as an input/output device, we argue that even the more recent attempts to model systems using closed-loop architectures still heavily rely on a strong separation between motor and perceptual functions. Previously, we have suggested that the mainstream notion of modularity strongly resonates with the separation principle of control theory. In this work we present a minimal model of a sensorimotor loop implementing an architecture based on the separation principle. We link this to popular formulations of perception and action in the cognitive sciences, and show its limitations when, for instance, external forces are not modelled by an agent. These forces can be seen as variables that an agent cannot directly control, e.g., a perturbation from the environment or interference caused by other agents. As an alternative approach inspired by embodied cognitive science, we then propose a nonmodular architecture based on active inference. We demonstrate the robustness of this architecture to unknown external inputs and show that the mechanism by which this is achieved in linear models is equivalent to integral control.
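
    The closing claim, that the robustness mechanism reduces to integral control in linear models, rests on a standard property of integral action. The sketch below is a textbook illustration rather than the paper's model: a scalar plant is pushed by an unmodelled constant force, a purely proportional controller settles with a steady-state error, and adding an integral term removes it. All gains and values are assumed for illustration.

```python
# Hypothetical sketch: why integral action rejects an unmodelled constant
# force. A proportional controller (loosely, a scheme that ignores the
# disturbance) settles with a steady-state error; adding an integral term,
# the mechanism the abstract equates with the active-inference architecture
# in the linear case, removes it.
import numpy as np

def simulate(ki, k=2.0, d=1.5, target=0.0, dt=0.01, T=20.0):
    """Euler simulation of dx/dt = u + d with u = -k*e - ki*integral(e)."""
    x, integ = 5.0, 0.0
    for _ in range(int(T / dt)):
        e = x - target
        integ += e * dt
        u = -k * e - ki * integ
        x += (u + d) * dt
    return x - target                     # final tracking error

print("proportional only    :", round(simulate(ki=0.0), 3))  # ~ d/k = 0.75
print("proportional+integral:", round(simulate(ki=1.0), 3))  # ~ 0
```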

    Transient dynamics between displaced fixed points: an alternate nonlinear dynamical framework for olfaction

    Significant insights into the dynamics of neuronal populations have been gained in the olfactory system, where rich spatio-temporal dynamics are observed during, and following, exposure to odours. It is now widely accepted that odour identity is represented in terms of stimulus-specific rate patterning observed in the cells of the antennal lobe (AL). Here we describe a nonlinear dynamical framework inspired by recent experimental findings which provides a compelling account of both the origin and the function of these dynamics. We start by analytically reducing a biologically plausible conductance-based model of the AL to a quantitatively equivalent rate model and construct conditions such that the rate dynamics are well described by a single globally stable fixed point (FP). We then describe the AL's response to an odour stimulus as rich transient trajectories between this stable baseline state (the single FP in the absence of odour stimulation) and the odour-specific position of the single FP during odour stimulation. We show how this framework can account for three phenomena that are observed experimentally: first, an inhibitory period often observed immediately after an odour stimulus is removed; second, the qualitative differences between the dynamics in the presence and absence of odour; and lastly, the invariance of a representation of odour identity to both the duration and intensity of an odour stimulus. We compare and contrast this framework with the currently prevalent nonlinear dynamical framework of 'winnerless competition', which describes AL dynamics in terms of heteroclinic orbits.
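
    The core picture, a single globally stable fixed point displaced by odour input, can be reproduced with a generic linear rate model. The sketch below uses illustrative parameters and a random weak coupling matrix, not the rate model derived from the conductance-based AL model in the paper. It steps an odour-specific input on and off and prints the distance of the activity from the baseline and odour-evoked fixed points, tracing the transient trajectories between them.

```python
# Hypothetical sketch: transients between displaced fixed points in a linear
# rate model tau*dr/dt = -r + W r + I(t). Odour onset moves the unique stable
# fixed point to r* = (I - W)^{-1} I_odour; activity relaxes toward it along a
# transient trajectory and returns to baseline after stimulus offset.
import numpy as np

rng = np.random.default_rng(3)
n, tau, dt = 20, 0.1, 0.001
W = 0.5 * rng.normal(0, 1 / np.sqrt(n), (n, n))  # weak coupling: single stable FP
assert np.max(np.linalg.eigvals(-np.eye(n) + W).real) < 0

I_odour = rng.uniform(0, 1, n)                   # odour-specific input pattern
fp_odour = np.linalg.solve(np.eye(n) - W, I_odour)

r = np.zeros(n)                                  # baseline FP is r = 0 here
for step in range(int(3.0 / dt)):
    t = step * dt
    I = I_odour if 1.0 <= t < 2.0 else np.zeros(n)   # odour on for 1 s
    r += dt / tau * (-r + W @ r + I)
    if step % int(0.25 / dt) == 0:
        print(f"t={t:4.2f}  |r - baseline|={np.linalg.norm(r):5.2f}  "
              f"|r - odour FP|={np.linalg.norm(r - fp_odour):5.2f}")
```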

    The modularity of action and perception revisited using control theory and active inference

    The assumption that action and perception can be investigated independently is entrenched in theories, models and experimental approaches across the brain and mind sciences. In cognitive science, this has been a central point of contention between computationalist and 4Es (enactive, embodied, extended and embedded) theories of cognition, with the former embracing the “classical sandwich” modular architecture of the mind and the latter actively denying that this separation can be made. In this work we suggest that the modular independence of action and perception strongly resonates with the separation principle of control theory, and furthermore that this principle provides formal criteria within which to evaluate the implications of the modularity of action and perception. We will also see that real-time feedback with the environment, often considered necessary for the definition of 4Es ideas, is not, however, a sufficient condition to avoid the “classical sandwich”. Finally, we argue that an emerging framework in the cognitive and brain sciences, active inference, extends ideas derived from control theory to the study of biological systems while disposing of the separation principle, describing non-modular models of behaviour strongly aligned with 4Es theories of cognition.
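
    The separation principle invoked above can be stated compactly in code: for a linear-quadratic-Gaussian problem, the optimal controller and the optimal state estimator are designed from disjoint ingredients and only composed at the end. The sketch below uses a generic discrete-time double-integrator with invented noise and cost parameters, not a model from the paper; it designs an LQR gain and a Kalman gain independently and couples them in a simulation.

```python
# Hypothetical sketch of the separation principle: the LQR controller is
# designed from the cost terms alone, the Kalman filter from the noise
# statistics alone, and the two modules are only composed afterwards.
import numpy as np
from scipy.linalg import solve_discrete_are

A = np.array([[1.0, 0.1], [0.0, 1.0]])        # illustrative double-integrator
B = np.array([[0.0], [0.1]])
C = np.array([[1.0, 0.0]])
Q, R = np.eye(2), np.array([[0.01]])          # control cost ("action module")
W, V = 0.01 * np.eye(2), np.array([[0.1]])    # noise covariances ("perception module")

# Controller designed with no reference to the sensor model...
S = solve_discrete_are(A, B, Q, R)
K = np.linalg.solve(R + B.T @ S @ B, B.T @ S @ A)
# ...and estimator designed with no reference to the cost function.
P = solve_discrete_are(A.T, C.T, W, V)
L = P @ C.T @ np.linalg.inv(C @ P @ C.T + V)

rng = np.random.default_rng(0)
x, x_hat = np.array([2.0, 0.0]), np.zeros(2)
for k in range(300):
    u = -K @ x_hat                                    # act on the estimate
    x = A @ x + B @ u + rng.multivariate_normal(np.zeros(2), W)
    y = C @ x + rng.normal(0.0, np.sqrt(V[0, 0]), 1)
    pred = A @ x_hat + B @ u                          # predict, then correct
    x_hat = pred + L @ (y - C @ pred)
print("final state:", np.round(x, 3))                 # regulated toward zero (up to noise)
```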